DeepNet: an ultrafast neural learning code for seismic imaging

Authors

  • Jacob Barhen
  • David B. Reister
  • Vladimir A. Protopopescu
Abstract

A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single-iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed; in actual field demonstrations it has achieved results unattainable to date with industry-standard tools. The virtual input layer allows for a specific type of preprocessing of the input vectors, and the associated computations between the actual and virtual input layers differ from the usual inter-layer neural network computations.

1. Background and purpose

The ability to accurately predict the location of remaining oil in the neighborhood of existing production wells is of vital economic importance to the petroleum industry. For practical purposes, one typically targets volumes of fluid 10 meters thick and 200 meters in lateral extent at a distance of 200 meters from each well, requiring a resolution accuracy of 5% in terms of the distance from the observation well. Available oilfield information incorporates many datasets with different scales, uncertainties, sample volumes, and relevance. Well logs (e.g., porosity, gamma ray response, and resistivity) provide the most accurate possible sensor-based characterization of the geological formations encountered along the path of a well [1]. On the other hand, low-resolution seismic data are generally used to conduct large-scale field assessments [2]. The specific focus of the research we report in this paper was to develop a methodology that would enable fast and accurate prediction of well pseudo-logs from seismic data across an entire oil field.

2. Network Architecture

We consider a generalized multilayer feed-forward neural network. In general, the nodes (neurons) are organized in layers, namely: (i) input, (ii) (one or several) hidden, and (iii) output. Each layer l contains N_l nodes. In addition to these traditional layers, we introduce a virtual input layer between the input layer and the (first) hidden layer.

[Figure 1: Neural net with virtual input layer]

To simplify the description, and without loss of generality, we discuss a network architecture having N_0 = I input nodes, N_1 = V virtual input nodes, one hidden layer with N_2 = H nodes, and N_3 = O output nodes. The goal in the learning process is to determine the synaptic interconnection matrices {W_VH, W_HO}. This will be achieved by minimizing (some norm of) the difference between the output values calculated by the net and the target outputs. In our approach, we decouple the nonlinearities of the transfer functions at each layer from the linear inter-layer pattern propagation. This paradigm follows the seminal observations of Biegler-König and Bärmann [3], and Tam and Chow [4]. The essence of their approach was to minimize the learning error on each layer separately, by solving a sequence of least-squares problems, rather than globally for the entire network. Since for most practical applications the number of training examples exceeds the dimensionality of the training patterns, approximate results are typically obtained. Our proposed algorithm, on the other hand, implements a sequence of alternating-direction singular value decompositions (SVD) on a network architecture having a monotonically decreasing number of nodes in successive layers.
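The full DeepNet implementation is not part of this excerpt. The NumPy sketch below is only a rough illustration, under stated assumptions, of the layer-wise idea described above: the transfer-function nonlinearity is decoupled from the linear inter-layer propagation, and each weight matrix is obtained from a single SVD-based least-squares solve rather than from iterative gradient descent. All names (phi, solve_layer, S_V, S_H, W_VH, W_HO) and the random placeholder data are hypothetical stand-ins for quantities the paper only describes.

import numpy as np

# Hypothetical sigmoid transfer function phi: R -> (0, 1) and its inverse (logit).
def phi(t):
    return 1.0 / (1.0 + np.exp(-t))

def phi_inv(s, eps=1e-6):
    s = np.clip(s, eps, 1.0 - eps)           # keep the logit finite
    return np.log(s / (1.0 - s))

def solve_layer(s_in, s_target):
    # One-shot least-squares fit of the weights linking two layers:
    # invert phi on the target postsynaptic outputs to obtain presynaptic
    # targets, then solve  s_in @ W ~= t_target  with an SVD-based
    # pseudo-inverse (np.linalg.pinv uses the SVD internally).
    t_target = phi_inv(s_target)
    return np.linalg.pinv(s_in) @ t_target

# Illustrative dimensions: K training patterns, layer widths I > V > H > O.
K, I, V, H, O = 200, 16, 12, 8, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((K, I))              # seismic input patterns, one per row
Y = rng.uniform(0.1, 0.9, (K, O))            # target outputs (scaled pseudo-logs)

S_V = phi(X @ rng.standard_normal((I, V)))   # stand-in for the virtual-layer preprocessing
S_H = rng.uniform(0.1, 0.9, (K, H))          # initialized hidden-layer outputs
W_HO = solve_layer(S_H, Y)                   # hidden -> output weights, one solve
W_VH = solve_layer(S_V, S_H)                 # virtual -> hidden weights, one solve
Y_hat = phi(phi(S_V @ W_VH) @ W_HO)          # forward sweep back to the output layer

In the actual algorithm the hidden-layer targets would come from backward pattern propagation rather than from a random initialization; the sketch only shows the shape of the per-layer SVD solves.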
We modify the traditional neural network architecture by introducing a virtual input layer connected to the input nodes through a nonlinear transfer function. The virtual layer acts as a preprocessor of the input vectors, and replaces a highly over-determined linear system with a uniquely solvable one.

3. Network Training

The learning algorithm begins at the output layer. A full iteration consists of a backward pattern propagation to the virtual layer, followed by a forward sweep back to the output layer. Our notation is as follows. A bar indicates a quantity calculated by the network (as opposed to a "target" or initialized quantity); a superscript f denotes a forward calculation. Each node implements a nonlinear transfer function φ : ℝ → (0, 1), typically a sigmoid. Since φ is bijective, the inverse φ⁻¹ is well defined. The K seismic signatures used for training are stored as rows of matrices; the number of columns of each matrix equals the number of nodes of the corresponding processing layer. For convenience, the matrix dimensions are explicitly indicated as subscripts. The following two equations relate the presynaptic inputs T to the postsynaptic outputs S at the output layer.
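The two equations themselves lie beyond this excerpt. Purely as an illustration consistent with the notation just introduced, and assuming a standard sigmoid output layer rather than the paper's actual formulation, the output-layer relations would typically read

    T_KO = φ⁻¹(S_KO)           (backward step: presynaptic targets from the target outputs)
    S̄ᶠ_KO = φ(T̄ᶠ_KO)           (forward step: calculated postsynaptic outputs)

where S_KO stores the K target output patterns row-wise, T_KO holds the corresponding presynaptic targets obtained through φ⁻¹, and T̄ᶠ_KO = S̄ᶠ_KH · W_HO during the forward sweep, so that the backward step reduces to a linear least-squares problem for W_HO, solvable by SVD as in the sketch above.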

Similar articles

Application of Artificial Neural Networks and Support Vector Machines for carbonate pores size estimation from 3D seismic data

This paper proposes a method for the prediction of pore size values in hydrocarbon reservoirs using 3D seismic data. To this end, an actual carbonate oil field in the south-western part of Iran was selected. Taking real geological conditions into account, different reservoir models were constructed for a range of viable pore size values. Seismic surveying was performed next on these models. F...


Assessment of Different Training Methods in an Artificial Neural Network to Calculate 2D Dose Distribution in Radiotherapy

Introduction: Treatment planning is the most important part of treatment. One of the important inputs to treatment planning systems is the beam dose distribution data, which is typically measured or calculated over a long time. This study aimed at shortening the time of dose calculations using an artificial neural network (ANN) and finding the best method of training t...


Classification and identification of geological facies using seismic data and competitive neural networks

Geological facies interpretation is essential for reservoir studies. Classification and identification of seismic traces is a powerful approach for distinguishing geological facies. The use of neural networks as classifiers is increasing in different fields, including seismics. They are computationally efficient and ideal for pattern identification. They can simply learn new algori...


ESTIMATING THE VULNERABILITY OF THE CONCRETE MOMENT RESISTING FRAME STRUCTURES USING ARTIFICIAL NEURAL NETWORKS

Heavy economic losses and human casualties caused by destructive earthquakes around the world clearly show the need for a systematic approach to large-scale damage detection in various types of existing structures, which could provide decision makers with the proper means for rehabilitation planning. The aim of this study is to present an innovative method for investigating the seismic vuln...


APPLICATION OF NEURAL NETWORK IN EVALUATION OF SEISMIC CAPACITY FOR STEEL STRUCTURES UNDER CRITICAL SUCCESSIVE EARTHQUAKES

Depending on tectonic activity, most buildings are subjected to multiple earthquakes, while a single design earthquake is suggested in most seismic design codes. The difficulty of obtaining information on the second shock, and the occasional use of inappropriate methods for estimating its characteristics, explain why successive earthquakes have largely been ignored in analysis procedures. In order to overcome t...



Publication date: 1999